
Search in the Catalogues and Directories

Hits 1 – 7 of 7

1
"We will Reduce Taxes" - Identifying Election Pledges with Language Models ...
BASE
2
SemEval 2021 Task 12: Learning with Disagreement ...
BASE
3
SemEval-2021 Task 12: Learning with Disagreements
Uma, Alexandra; Fornaciari, Tommaso; Dumitrache, Anca. Association for Computational Linguistics, 2021
BASE
4
We Need to Consider Disagreement in Evaluation
Basile, Valerio; Fell, Michael; Fornaciari, Tommaso. Association for Computational Linguistics, Stroudsburg, PA, USA, 2021
BASE
5
Fake opinion detection: how similar are crowdsourced datasets to real data? [Journal]
Fornaciari, Tommaso; Cagnina, Leticia; Rosso, Paolo.
DNB Subject Category Language
6
Fake Opinion Detection: How Similar are Crowdsourced Datasets to Real Data?
Fornaciari, Tommaso; Cagnina, Leticia; Rosso, Paolo. Springer-Verlag, 2020
BASE
7
Identifying fake Amazon reviews as learning from crowds
Fornaciari, Tommaso; Poesio, Massimo. Association for Computational Linguistics, 2014
Abstract: Customers who buy products such as books online often rely on other customers' reviews more than on reviews in specialist magazines. Unfortunately, confidence in such reviews is often misplaced due to the explosion of so-called sock puppetry: authors writing glowing reviews of their own books. Identifying such deceptive reviews is not easy. The first contribution of our work is the creation of a collection including a number of genuinely deceptive Amazon book reviews, assembled in collaboration with crime writer Jeremy Duns, who has devoted a great deal of effort to unmasking sock puppetry among his colleagues. But there can be no certainty concerning the other reviews in the collection: all we have is a number of cues, also developed in collaboration with Duns, suggesting that a review may be genuine or deceptive. This corpus is thus an example of a collection where the true label cannot be acquired for every instance, and where the cues of deception were treated as annotators assigning heuristic labels. A number of approaches have been proposed for such cases; we adopt the 'learning from crowds' approach proposed by Raykar et al. (2010). Thanks to Duns' certainly fake reviews, the second contribution of this work is an evaluation of the effectiveness of different annotation methods, measured by the performance of models trained to detect deceptive reviews. © 2014 Association for Computational Linguistics.
Keyword: P Philology. Linguistics; QA75 Electronic computers. Computer science
URL: http://repository.essex.ac.uk/14591/
https://aclanthology.org/volumes/E14-1/
http://repository.essex.ac.uk/14591/1/document.pdf
BASE
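The abstract above treats each deception cue as a noisy annotator and aggregates their heuristic labels. A minimal sketch of the EM estimation at the core of such learning-from-crowds approaches (in the style of Raykar et al. 2010, without the feature-based classifier component) is shown below; the function name and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def learn_from_crowds(votes, n_iter=50):
    """EM estimation of latent binary labels from noisy annotators.

    votes: (n_items, n_annotators) array of 0/1 heuristic labels.
    Returns the posterior P(y=1) per item, plus each annotator's
    estimated sensitivity P(vote=1 | y=1) and specificity P(vote=0 | y=0).
    """
    votes = np.asarray(votes, dtype=float)
    # Initialise the label posteriors with the per-item majority vote.
    mu = votes.mean(axis=1)
    for _ in range(n_iter):
        # M-step: re-estimate annotator reliabilities and the class prior
        # from the current soft labels.
        sens = (mu[:, None] * votes).sum(axis=0) / (mu.sum() + 1e-9)
        spec = ((1 - mu)[:, None] * (1 - votes)).sum(axis=0) / ((1 - mu).sum() + 1e-9)
        prior = mu.mean()
        # E-step: posterior over the latent label of each item, combining
        # every annotator's vote under the Bernoulli noise model.
        like1 = prior * np.prod(sens**votes * (1 - sens)**(1 - votes), axis=1)
        like0 = (1 - prior) * np.prod(spec**(1 - votes) * (1 - spec)**votes, axis=1)
        mu = like1 / (like1 + like0 + 1e-12)
    return mu, sens, spec
```

With two consistent annotators and one near-random one, the estimated labels follow the reliable pair and the unreliable annotator's sensitivity is down-weighted; the paper's setting additionally evaluates such aggregated labels against Duns' known-fake reviews.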

Sources: Catalogues (1) · Bibliographies (0) · Linked Open Data catalogues (0) · Online resources (0) · Open access documents (6)
© 2013 – 2024 Lin|gu|is|tik